
    SURFACE NORMAL RECONSTRUCTION USING POLARIZATION-UNET

    Three-dimensional reconstruction of objects has many applications across diverse fields, so choosing a suitable method for high-resolution 3D reconstruction is an important issue, and capturing fine surface detail in 3D models remains a serious challenge. To date, active methods have typically been used for high-resolution 3D reconstruction, but they require a light source placed close to the object. Shape from polarization (SfP) is an attractive alternative: it is a passive technique for high-resolution 3D reconstruction and therefore avoids the drawbacks of active methods. The changes in polarization of light reflected from an object can be measured with a polarization camera, or by placing a polarizing filter in front of a conventional digital camera and rotating the filter. From this information the surface normal can be recovered with high accuracy, enabling local reconstruction of surface details. This paper presents an end-to-end deep learning approach for estimating the surface normals of objects. A benchmark dataset is used to train the neural network and to evaluate the results, both quantitatively and qualitatively, against other methods and under different lighting conditions, using the mean angular error (MAE) as the evaluation metric. The evaluations show that the proposed method reconstructs surface normals accurately, achieving the lowest MAE of 18.06 degrees over the whole dataset, compared with 41.44 to 49.03 degrees for previous physics-based methods.
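    The abstract does not include the authors' code; the following is a minimal illustrative sketch (with assumed array names and shapes) of the two standard ingredients it mentions: extracting polarization cues from images captured at several polarizer angles, and scoring a predicted normal map against ground truth with the mean angular error in degrees.

    ```python
    import numpy as np

    def polarization_cues(i0, i45, i90, i135):
        """Degree and angle of linear polarization from intensity images captured
        at polarizer angles 0, 45, 90 and 135 degrees (each an HxW array)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)                    # total intensity (Stokes S0)
        s1 = i0 - i90                                         # Stokes S1
        s2 = i45 - i135                                       # Stokes S2
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)  # degree of linear polarization
        aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization (rad)
        return dolp, aolp

    def mean_angular_error_deg(n_pred, n_gt):
        """MAE in degrees between two HxWx3 maps of unit surface normals."""
        dots = np.clip(np.sum(n_pred * n_gt, axis=-1), -1.0, 1.0)
        return np.degrees(np.arccos(dots)).mean()
    ```

    In this sketch the network's output and the ground-truth normals are assumed to be already normalized to unit length; the reported 18.06-degree figure would correspond to this metric averaged over the evaluation set.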

    Structured Illumination Microscope Image reconstruction using unrolled physics-informed generative adversarial network (UPIGAN)

    In medical and microscopy imaging applications where the object is not directly visible, images are never identical to the ground truth. In three-dimensional structured illumination microscopy (3D-SIM), the acquired images of the object have limited resolution due to the point spread function (PSF) of the imaging system. In addition, because of the acquisition process, images taken under low light and in the presence of electro-optical noise can have a low signal-to-noise ratio and suffer from other undesirable aberrations. To obtain a high-resolution restored image, the data must be processed digitally. The inverse imaging problem in 3D-SIM has been addressed with various computational imaging techniques. Traditional model-based approaches can produce image artifacts because they depend on system parameters that are required but not accurately known, and some iterative computational imaging methods are computationally intensive. Deep learning (DL) approaches, in contrast to traditional image restoration methods, can tackle the problem without access to an analytical model; although some are effective, they are biased because they do not exploit the 3D-SIM model. This research presents an unrolled physics-informed (UPI) generative adversarial network (UPIGAN) for reconstructing 3D-SIM images, using data samples of mitochondria from a 3D-SIM system. The design exploits physics knowledge in the unrolling step, and the GAN employs a residual channel attention super-resolution deep neural network (DNN) in its generator architecture. Both qualitative and quantitative comparisons show that including the UPI term in the GAN improves reconstruction relative to the same GAN architecture without it.
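    To make the unrolling idea concrete, here is a hedged sketch (not the authors' implementation) of a physics-informed unrolled generator: each stage applies a data-consistency gradient step using the known PSF as the forward model, followed by a small learned residual refiner. The class name, the plain CNN refiner (standing in for the residual channel attention network), the 2D simplification of the 3D-SIM forward model, and all hyperparameters are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class UnrolledGenerator(nn.Module):
        """Illustrative unrolled physics-informed generator: each stage takes a
        gradient step on the data-consistency term ||Hx - y||^2, where H is a
        blur by the known PSF applied in the Fourier domain, then refines the
        estimate with a small learned CNN (shown in 2D for brevity)."""
        def __init__(self, otf, n_stages=3):
            super().__init__()
            self.register_buffer("otf", otf)                 # precomputed FFT of the PSF (HxW, complex)
            self.step = nn.Parameter(torch.full((n_stages,), 0.5))  # learnable step sizes
            self.refiners = nn.ModuleList(
                nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(32, 1, 3, padding=1))
                for _ in range(n_stages))

        def forward(self, y):
            x = y.clone()                                    # initialize with the measurement
            for k, refine in enumerate(self.refiners):
                Hx = torch.fft.ifft2(torch.fft.fft2(x) * self.otf).real               # forward model Hx
                grad = torch.fft.ifft2(torch.fft.fft2(Hx - y) * self.otf.conj()).real  # H^T (Hx - y)
                x = x - self.step[k] * grad                  # data-consistency step
                x = x + refine(x)                            # learned residual refinement
            return x
    ```

    In a GAN setting such as the one described, this unrolled module would play the role of the generator, with an adversarial discriminator and pixel-wise losses trained on pairs of acquired and reference images; omitting the data-consistency step recovers a purely learned baseline comparable to "the GAN architecture without the UPI term".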